Search for: All records
Total Resources: 5
- Large Language Models (LLMs) excel at text summarization, a task that requires models to select content based on its importance. However, the exact notion of salience that LLMs have internalized remains unclear. To bridge this gap, we introduce an explainable framework to systematically derive and investigate information salience in LLMs through their summarization behavior. Using length-controlled summarization as a behavioral probe into the content selection process, and tracing the answerability of Questions Under Discussion throughout, we derive a proxy for how models prioritize information. Our experiments on 13 models across four datasets reveal that LLMs have a nuanced, hierarchical notion of salience, generally consistent across model families and sizes. While models show highly consistent behavior and hence salience patterns, this notion of salience cannot be accessed through introspection, and only weakly correlates with human perceptions of information salience.
Free, publicly-accessible full text available July 1, 2026.
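The toy Python sketch below makes the probing loop from the abstract concrete: summarize a document at several length budgets, test which Questions Under Discussion (QUDs) each summary can still answer, and treat survival under tighter budgets as higher salience. It is an illustration under stated assumptions, not the authors' released code: the truncation "summarizer", the prefix-match "answerability" check, and the fraction-of-budgets score are stand-ins for the LLM summarizer and QA-style answerability judge the framework actually uses.

```python
def summarize(document: str, budget_words: int) -> str:
    # Stand-in for a length-controlled LLM summary
    # (e.g., "Summarize in at most N words").
    return " ".join(document.split()[:budget_words])

def is_answerable(question: str, summary: str) -> bool:
    # Stand-in for a QA-based answerability judge: every content word of the
    # question must match (as a prefix, to tolerate inflection) a summary word.
    def words(text: str) -> list[str]:
        return [w.lower().strip("?,.") for w in text.split()]
    content = [w for w in words(question) if len(w) > 4]
    return all(any(s.startswith(w) for s in words(summary)) for w in content)

def salience_scores(document: str, quds: list[str], budgets: list[int]) -> dict[str, float]:
    # Proxy for salience: the fraction of length budgets at which a QUD stays
    # answerable. Content that survives even the tightest budgets is treated
    # as most salient (one simple aggregation; an assumption).
    return {
        q: sum(is_answerable(q, summarize(document, b)) for b in budgets) / len(budgets)
        for q in quds
    }

if __name__ == "__main__":
    doc = ("The city council approved the new budget on Monday. "
           "Opponents protested the transit cuts outside city hall.")
    quds = ["What did the council approve?", "Who protested the transit cuts?"]
    print(salience_scores(doc, quds, budgets=[5, 10, 20]))
```

Running this prints a higher score for the budget-approval QUD (answerable at every budget) than for the protest QUD (answerable only at the loosest budget), a miniature version of the hierarchical salience pattern the abstract describes.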
- Trienes, Jan; Joseph, Sebastian; Schlötterer, Jörg; Seifert, Christin; Lo, Kyle; Xu, Wei; Wallace, Byron C; Li, Junyi Jessy (Association for Computational Linguistics)
- Joseph, Sebastian; Chen, Lily; Trienes, Jan; Göke, Hannah; Coers, Monika; Xu, Wei; Wallace, Byron C; Li, Junyi Jessy (Association for Computational Linguistics)
- Trienes, Jan; Joseph, Sebastian; Schlötterer, Jörg; Seifert, Christin; Lo, Kyle; Xu, Wei; Wallace, Byron; Li, Junyi Jessy (Association for Computational Linguistics)
- Joseph, Sebastian; Chen, Lily; Trienes, Jan; Göke, Hannah; Coers, Monika; Xu, Wei; Wallace, Byron; Li, Junyi Jessy (Association for Computational Linguistics)